Google "helping for abuse"? Advertising spreads misinformation about the new crown epidemic and makes a profit from it.
AdSense and DoubleClick are making money by advertising and profiting from the misinformation about the new crown outbreak and outbreak.
Tencent Technology News, June 12 — According to foreign media reports, two independent research teams have found that Google's programmatic advertising tools, AdSense and DoubleClick, are serving ads on websites that publish misinformation about the COVID-19 outbreak, and profiting from it.
By allowing these sites to monetize their content, Google is making money as well, according to Daniel E. Stevens, executive director of the Campaign for Accountability, an organization that publishes critical research on tech giants through its Tech Transparency Project. "Despite Google's public commitments, it has not cut off the flow of ads and revenue to snake-oil salesmen promoting misinformation about the coronavirus," Stevens said in a statement.
Independent researchers from the Tech Transparency Project and the Global Disinformation Index have published reports detailing when and where Google placed ads next to health misinformation on third-party websites. In its report, the Tech Transparency Project identified 97 sites that habitually publish false information and generate revenue using Google's advertising tools.
The Global Disinformation Index publishes monthly advertising data on conspiracy theory sites. In March, the index found that 1,400 European sites spreading COVID-19 misinformation had generated a total of $76 million in advertising revenue, most of it through Google's tools. Both organizations say Google is profiting from health conspiracy theories even as it publicly promises to crack down on COVID-19 misinformation.
"We have a strict publisher policy that governs the behavior of their advertising content," said Christa Muldoon, a Google representative. In particular, we prohibit publishers from misrepresenting themselves or their products and taking aggressive steps against content that directly harms users or disseminates medical misinformation. When a page or website violates our policies, we take immediate action to eliminate its profitability. "However, in these cases, Google's advertising sites or articles do not violate its policies."
In May and June, ads from One Medical, a primary care provider, appeared on a conspiracy theory website called the Awakening Times, next to an article claiming that the COVID-19 vaccine is the source of a dangerous invasion. The article does not say the vaccine is directly harmful; instead, it mixes accurate and inaccurate sources, portrays the vaccine as a harmful technology, and casts doubt on future coronavirus vaccines.
One Medical, which has since blacklisted the site, said: "One Medical strives to be a reliable and clinically reviewed source of critical healthcare information, including on the COVID-19 outbreak, and we take the fight against misinformation seriously." In addition to One Medical, Google has also placed ads from AAA, AARP, Coronavirus.gov (the CDC's COVID-19 information website), Geico, the fourth-largest U.S. auto insurer, the online lending platform Lending Tree, the carmaker Subaru, UNICEF, and the U.S. Forest Service on such sites.
Historically, Google has opposed regulating disinformation on its own platforms and within its advertising networks. Since the COVID-19 outbreak began, however, the company has taken a more aggressive stance against content that could harm users. In April, Google pledged $6.5 million to fact-checkers and organizations actively combating false information related to the pandemic. The company has also taken steps across its platforms to place factual information at the top of the page, suppress dubious claims, and remove information that could harm human health. Even with these new standards, Google has been cautious about deciding which content violates its policies, and what counts as a risk to human health is not always clear-cut.
Some believe that anti-vaccine content meets this standard. Multiple studies have shown that while anti-vaccine videos may be relatively few in number, they are often popular and can influence people's decisions. In a 2018 study on how anti-vaccine campaigns affect vaccination decisions, the researchers wrote: "When examining the rise and spread of the anti-vaccine movement, access to false anti-vaccine information online should not be underestimated." The paper concludes that the rise of the anti-vaccine movement poses a "terrible threat" to public health.
Last year, the World Health Organization listed vaccine hesitancy as one of the top 10 threats to global health, citing a surge in measles cases in the US and Europe. These views may also spread ambivalence about future vaccines. A recent Pew survey found that while most Americans would get a COVID-19 vaccine, a surprising 27 percent said they would not.
Google seems to understand the dilemma: it does not allow anti-vaccine content on YouTube to earn advertising revenue, and it restricts anti-vaccine content on both its search platform and YouTube. But when it comes to the ads it serves on third-party sites, Google's position seems less firm.
For example, Google may ban certain content on YouTube while allowing sites carrying the same content to monetize it through its advertising network. In May, Google removed the YouTube channel of conspiracy theorist David Icke over unsubstantiated claims about the COVID-19 outbreak. However, the company continues to run ads on his website.
The Tech Transparency Project found a Google ad from Palo Alto Networks, a cybersecurity firm, on Icke's website, next to a video promoting false claims about the coronavirus. The group also found Google ads on Activist Post, another conspiracy theory site, next to a video interview with Icke.
Google's choice to keep these sites on its network has created a recurring problem for brands that do not want to be associated with false health information or conspiracy theories. Policing fake news sites is a daunting task. A representative for One Medical said the process of monitoring where its ads appear is largely manual: the company reviews the sites where it advertises and blacklists those that post health misinformation so its ads no longer appear there. The company says One Medical ads are targeted at individuals, not specific websites, which means some ads may end up on sites the company never intended to reach.
A One Medical spokesperson said: "We regularly monitor emerging websites and sources of misinformation and blacklist them from our advertising to prevent the spread of misinformation. This is part of our ongoing effort to flag and act quickly to minimize situations in which our brand appears alongside inaccurate, malicious, offensive, or illegal content."
Both AARP and UNICEF said they have revised their advertising policies in light of the researchers' reports. (Tencent Technology Review/Golden Deer)
Go to "Discovery" - "Take a Look" to browse "Friends are watching"